Concave Learners for Rankboost
Authors
Abstract
Rankboost has been shown to be an effective algorithm for combining ranks. However, its ability to generalize well and not overfit depends directly on the choice of weak learner, in the sense that the rank function is regularized only through the regularization properties of its weak learners. We present a regularization property called consistency in preference and confidence, which translates mathematically into monotonic concavity, and describe a new weak ranking learner (MWGR) that generates ranking functions with this property. In experiments combining ranks from multiple face recognition algorithms and in an experiment combining text information retrieval systems, rank functions built with MWGR proved superior to those built with binary weak learners.
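As a sketch of the setting the abstract describes, the following hedged Python example implements the standard RankBoost combination loop with a simple binary (threshold) weak learner; the paper's concave MWGR learner is not reproduced here, and the function and variable names (rankboost, rank_fn, scores, pairs) are illustrative assumptions rather than anything from the paper.

```python
import numpy as np

def rankboost(scores, pairs, n_rounds=50):
    """Sketch of the RankBoost combination loop with a binary threshold weak learner.

    scores : (n_samples, n_features) array of input ranks/scores to combine.
    pairs  : list of (i, j) index pairs meaning sample j should rank above sample i.
    Returns a list of (alpha, feature, threshold) weak rankings; the combined
    ranking function is H(x) = sum_t alpha_t * h_t(x).
    """
    D = np.full(len(pairs), 1.0 / len(pairs))  # distribution over crucial pairs
    ensemble = []

    for _ in range(n_rounds):
        best = None
        # Weak-learner search: binary threshold h(x) = 1[x_f > theta].
        # A concave weak learner such as MWGR would replace this inner search.
        for f in range(scores.shape[1]):
            for theta in np.unique(scores[:, f]):
                h = (scores[:, f] > theta).astype(float)
                # r in [-1, 1]: weighted tendency of h to order the pairs correctly
                r = sum(D[k] * (h[j] - h[i]) for k, (i, j) in enumerate(pairs))
                if best is None or abs(r) > abs(best[0]):
                    best = (r, f, theta)

        r, f, theta = best
        r = float(np.clip(r, -0.999, 0.999))     # guard against division by zero
        alpha = 0.5 * np.log((1 + r) / (1 - r))  # RankBoost step size
        ensemble.append((alpha, f, theta))

        # Re-weight: pairs that h_t orders correctly lose weight,
        # misordered pairs gain weight.
        h = (scores[:, f] > theta).astype(float)
        D *= np.exp(np.array([alpha * (h[i] - h[j]) for i, j in pairs]))
        D /= D.sum()

    return ensemble

def rank_fn(ensemble, scores):
    """Evaluate the combined ranking function H on an array of score vectors."""
    H = np.zeros(scores.shape[0])
    for alpha, f, theta in ensemble:
        H += alpha * (scores[:, f] > theta)
    return H
```

Here pairs encodes whatever preference information is available; swapping in a different weak learner, concave or otherwise, changes only the inner search while the re-weighting and combination steps stay the same.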
Similar Resources
Boosting First-Order Clauses for Large, Skewed Data Sets
Creating an effective ensemble of clauses for large, skewed data sets requires finding a diverse, high-scoring set of clauses and then combining them in such a way as to maximize predictive performance. We have adapted the RankBoost algorithm in order to maximize area under the recall-precision curve, a much better metric when working with highly skewed data sets than ROC curves. We have also expl...
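For intuition on the claim that the recall-precision curve is more informative than ROC on highly skewed data, here is a small illustrative Python check; the adapted boosting algorithm itself is not shown, and scikit-learn plus the synthetic data are assumptions of this sketch.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, precision_recall_curve, auc

rng = np.random.default_rng(0)

# Heavily skewed labels: roughly 1% positives.
y = (rng.random(20000) < 0.01).astype(int)
# A mediocre scorer: positives get a modest boost over background noise.
scores = rng.normal(size=y.size) + 1.5 * y

prec, rec, _ = precision_recall_curve(y, scores)
print("ROC AUC:", roc_auc_score(y, scores))  # looks flattering despite poor precision
print("PR  AUC:", auc(rec, prec))            # exposes how few retrieved items are positive
```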
An Efficient Boosting Algorithm for Combining Preferences
The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementati...
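For reference, the pair-weight update at the heart of RankBoost can be written as follows; this is a sketch in standard notation, not a quotation from the paper, where the pair (x_0, x_1) means x_1 should be ranked above x_0 and Z_t normalizes the weights.

```latex
D_{t+1}(x_0, x_1) \;=\; \frac{D_t(x_0, x_1)\,
  \exp\!\bigl(\alpha_t\,(h_t(x_0) - h_t(x_1))\bigr)}{Z_t},
\qquad
H(x) \;=\; \sum_{t=1}^{T} \alpha_t\, h_t(x).
```

Pairs that the weak ranking h_t orders correctly lose weight, so later rounds concentrate on pairs that are still misordered.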
Margin-based Ranking and an Equivalence between AdaBoost and RankBoost
We study boosting algorithms for learning to rank. We give a general margin-based bound for ranking based on covering numbers for the hypothesis space. Our bound suggests that algorithms that maximize the ranking margin will generalize well. We then describe a new algorithm, smooth margin ranking, that precisely converges to a maximum ranking-margin solution. The algorithm is a modification of ...
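The ranking margin referred to here is usually defined, by analogy with the AdaBoost classification margin, as the smallest normalized score gap over crucial pairs; the formula below is a sketch of that standard definition rather than a quotation from the paper, with x_j preferred over x_i and F = \sum_t \lambda_t h_t.

```latex
\mu(x_i, x_j) \;=\; \frac{\sum_t \lambda_t \bigl(h_t(x_j) - h_t(x_i)\bigr)}
                         {\sum_t \lvert\lambda_t\rvert},
\qquad
\operatorname{margin}(F) \;=\; \min_{(x_i, x_j)\ \text{crucial}} \mu(x_i, x_j).
```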
CFD Study of Concave Turbine
Computational fluid dynamics (CFD) is a powerful numerical tool that is becoming widely used to simulate many processes in industry. In this work, a CFD study of a stirred tank with 7 types of concave blades is presented. The sliding mesh (SM) technique was used to model the impeller rotation, and the RNG k-ε model was selected for turbulence. Power consumption at various speeds in th...
Journal: Journal of Machine Learning Research
Volume: 8, Issue: -
Pages: -
Publication date: 2007